Alexandria County
The Justice Department Sues Google Over Its Digital Advertising Dominance
The Justice Department and eight states filed an antitrust suit against Google on Tuesday, seeking to shatter its alleged monopoly over the entire ecosystem of online advertising, which the suit describes as a harmful burden on advertisers, consumers and even the U.S. government. The government alleges that Google's plan to assert dominance has been to "neutralize or eliminate" rivals through acquisitions and to force advertisers to use its products by making it difficult to use competitors' products. The antitrust suit was filed in federal court in Alexandria, Virginia. Attorney General Merrick Garland said in a press conference Tuesday that "for 15 years, Google has pursued a course of anti-competitive conduct" that has halted the rise of rival technologies and manipulated the mechanics of online ad auctions to force advertisers and publishers to use its tools. In so doing, he added, "Google has engaged in exclusionary conduct" that has "severely weakened," if not destroyed, competition in the ad tech industry.
C-Store Artificial Intelligence Is Alive
ALEXANDRIA, Va.--Innovation came to life for the Conexxus Innovation Research Committee (IRC) during a recent field trip that members made to multiple sites in Austin, Texas. One of the hallmarks of the IRC is to experience what's new for the industry firsthand. A visit to a new TXB Stores location in Georgetown, Texas, was on the list not only to taste its breakfast tacos but also to see how an artificial intelligence pilot utilizing the existing security camera system has progressed. The pilot runs on SparkCognition's Visual AI Advisor solution, and the insights from the location visit were intriguing and revealing. To review the data, we visited with SparkCognition representatives at their offices and HyperWerx lab on a 50-acre site.
How to launch--and scale--a successful AI pilot project
At the US Patent & Trademark Office in Alexandria, Virginia, artificial intelligence (AI) projects are expediting the patent classification process, helping detect fraud, and expanding examiners' searches for similar patents, enabling them to search through more documents in the same amount of time. And every one of those projects started as a pilot. "Proofs of concept (PoCs) are a key approach we use to learn about new technologies, test business value assumptions, de-risk scale project delivery, and inform full production implementation decisions," says USPTO CIO Jamie Holcombe. Once the pilot proves out, he says, the next step is to determine if it can scale. Indian e-commerce vendor Flipkart has followed a similar process before deploying projects that allow for text and visual search through millions of items for customers who speak 11 different languages.
Advance Trustworthy AI and ML, and Identify Best Practices for Scaling AI - AI Trends
Advancing trustworthy AI and machine learning to mitigate agency risk is a priority for the US Department of Energy (DOE), and identifying best practices for implementing AI at scale is a priority for the US General Services Administration (GSA). That's what attendees learned in two sessions at the AI World Government live and virtual event held in Alexandria, Va. last week. Pamela Isom, Director of the AI and Technology Office at the DOE, who spoke on Advancing Trustworthy AI and ML Techniques for Mitigating Agency Risks, has been involved in proliferating the use of AI across the agency for several years. With an emphasis on applied AI and data science, she oversees risk mitigation policies and standards and has been involved with applying AI to save lives, fight fraud, and strengthen the cybersecurity infrastructure. She emphasized the need for the AI project effort to be part of a strategic portfolio.
Digital Natives Seen Having Advantages as Part of Government AI Engineering Teams - AI Trends
AI is more accessible to young people in the workforce who grew up as 'digital natives' with Alexa and self-driving cars as part of the landscape, giving them expectations grounded in their experience of what is possible. That idea set the foundation for a panel discussion at AI World Government on Mindset Needs and Skill Set Myths for AI engineering teams, held this week virtually and in-person in Alexandria, Va. "People feel that AI is within their grasp because the technology is available, but the technology is ahead of our cultural maturity," said panel member Dorothy Aronson, CIO and Chief Data Officer for the National Science Foundation. "We might have access to big data, but it might not be the right thing to do," to work with it in all cases, she said. Things are accelerating, which is raising expectations. When panel member Vivek Rao, lecturer and researcher at the University of California at Berkeley, was working on his PhD, a paper on natural language processing might have been a master's thesis. "Now we assign it as a homework assignment with a two-day turnaround."
Getting Government AI Engineers to Tune into AI Ethics Seen as Challenge - AI Trends
Engineers tend to see things in unambiguous terms, which some may call black-and-white terms, such as a choice between right and wrong or good and bad. The consideration of ethics in AI is highly nuanced, with vast gray areas, making it challenging for AI software engineers to apply it in their work. That was a takeaway from a session on the Future of Standards and Ethical AI at the AI World Government conference held in-person and virtually in Alexandria, Va. this week. An overall impression from the conference is that the discussion of AI and ethics is happening in virtually every quarter of AI in the vast enterprise of the federal government, and the consistency of points being made across all these different and independent efforts stood out. "We engineers often think of ethics as a fuzzy thing that no one has really explained," stated Beth-Anne Schuelke-Leech, an associate professor of Engineering Management and Entrepreneurship at the University of Windsor, Ontario, Canada, speaking at the Future of Ethical AI session.
How Accountability Practices Are Pursued by AI Engineers in the Federal Government - AI Trends
Two experiences of how AI developers within the federal government are pursuing AI accountability practices were outlined at the AI World Government event held virtually and in-person this week in Alexandria, Va. Taka Ariga, chief data scientist and director at the US Government Accountability Office, described an AI accountability framework he uses within his agency and plans to make available to others. And Bryce Goodman, chief strategist for AI and machine learning at the Defense Innovation Unit (DIU), a unit of the Department of Defense founded to help the US military make faster use of emerging commercial technologies, described work in his unit to translate principles of AI development into terminology that an engineer can apply. Ariga, the first chief data scientist appointed to the US Government Accountability Office and director of the GAO's Innovation Lab, discussed an AI Accountability Framework he helped to develop by convening a forum of experts in government, industry and nonprofits, as well as federal inspector general officials and AI experts. "We are adopting an auditor's perspective on the AI accountability framework," Ariga said.
US judge rules only humans, not AI, can get patents
The big picture: A US judge ruled this week that an artificial intelligence cannot be listed as the inventor on a patent. This ruling is the latest on an issue that has come before judges in multiple countries. A court in Alexandria, Virginia, ruled that inventions can only be patented under the name of a "natural person." The decision was made against someone who tried to list two designs under the name of an AI as part of a broader project to gain worldwide recognition of AI-powered inventions. Imagination Engines, Inc. CEO Stephen Thaler built an AI called DABUS, which independently designed a new kind of drink holder and a flashing light (used to get someone's attention). The name "DABUS," along with "Invention generated by artificial intelligence," was used in the attempted patent filing for the inventions.
Only Humans, Not AI Machines, Get a U.S. Patent, Judge Says
A computer using artificial intelligence can't be listed as an inventor on patents because only a human can be an inventor under U.S. law, a federal judge ruled in the first American decision that's part of a global debate over how to handle computer-created innovation. Federal law requires that an "individual" take an oath that he or she is the inventor on a patent application, and both the dictionary and legal definition of an individual is a natural person, ruled U.S. District Judge Leonie Brinkema in Alexandria, Virginia.
Analyst pleads guilty to leaking secrets about drone program
A former Air Force intelligence analyst pleaded guilty Wednesday to leaking classified documents to a reporter about military drone strikes against al-Qaida and other terrorist targets. The guilty plea from Daniel Hale, 33, of Nashville, Tennessee, comes just days before he was slated to go on trial in federal court in Alexandria, Virginia, for violating the World War I-era Espionage Act. Hale admitted leaking roughly a dozen secret and top-secret documents to a reporter in 2014 and 2015, when he was working for a contractor as an analyst at the National Geospatial-Intelligence Agency (NGA).